pattern separation


HiCL: Hippocampal-Inspired Continual Learning

Kapoor, Kushal, Mackey, Wyatt, Aloimonos, Yiannis, Lin, Xiaomin

arXiv.org Artificial Intelligence

We propose HiCL, a novel hippocampal-inspired dual-memory continual learning architecture designed to mitigate catastrophic forgetting by using elements inspired by the hippocampal circuitry. Our system encodes inputs through a grid-cell-like layer, followed by sparse pattern separation using a dentate gyrus-inspired module with top-k sparsity. Episodic memory traces are maintained in a CA3-like autoassociative memory. Task-specific processing is dynamically managed via a DG-gated mixture-of-experts mechanism, wherein inputs are routed to experts based on cosine similarity between their normalized sparse DG representations and learned task-specific DG prototypes computed through online exponential moving averages. This biologically grounded yet mathematically principled gating strategy enables differentiable, scalable task-routing without relying on a separate gating network, and enhances the model's adaptability and efficiency in learning multiple sequential tasks. Cortical outputs are consolidated using Elastic Weight Consolidation weighted by inter-task similarity. Crucially, we incorporate prioritized replay of stored patterns to reinforce essential past experiences. Evaluations on standard continual learning benchmarks demonstrate the effectiveness of our architecture in reducing task interference, achieving near state-of-the-art results in continual learning tasks at lower computational cost. Our code is available here https://github.com/
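The DG-gated routing is the most concrete mechanism in this abstract, so a short sketch may help make it tangible. The following is a minimal PyTorch sketch of DG-style top-k encoding, cosine-similarity routing against task prototypes, and the online EMA prototype update; all function names, the top-k value, and the momentum are illustrative assumptions, not the authors' code (the repository link above is truncated in the source).

```python
# Minimal sketch of the DG-gated routing described above (illustrative
# assumptions throughout; names, k, and momentum are not from the paper).
import torch
import torch.nn.functional as F

def dg_encode(x, W_dg, k=32):
    """Dentate-gyrus-like encoding: linear projection + top-k sparsification."""
    h = x @ W_dg                                   # project into a wide DG space
    topk = torch.topk(h, k, dim=-1)
    sparse = torch.zeros_like(h).scatter_(-1, topk.indices, topk.values)
    return F.normalize(sparse, dim=-1)             # unit-norm sparse code

def route_to_expert(dg_code, prototypes):
    """Route each input to the expert with the most cosine-similar prototype."""
    sims = dg_code @ prototypes.T                  # cosine similarity (unit norms)
    return sims.argmax(dim=-1), sims

def update_prototype(prototypes, task_id, dg_code, momentum=0.99):
    """Online exponential-moving-average update of the active task's prototype."""
    proto = momentum * prototypes[task_id] + (1 - momentum) * dg_code.mean(0)
    prototypes[task_id] = F.normalize(proto, dim=-1)
    return prototypes
```

Because routing reduces to a similarity lookup against per-task prototypes, no separate gating network needs to be trained, which is the efficiency argument the abstract makes.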


A Neural Network Model of Complementary Learning Systems: Pattern Separation and Completion for Continual Learning

Jun, James P, Marupudi, Vijay, Shah, Raj Sanjay, Varma, Sashank

arXiv.org Artificial Intelligence

Learning new information without forgetting prior knowledge is central to human intelligence. In contrast, neural network models suffer from catastrophic forgetting: a significant degradation in performance on previously learned tasks when acquiring new information. The Complementary Learning Systems (CLS) theory offers an explanation for this human ability, proposing that the brain has distinct systems for pattern separation (encoding distinct memories) and pattern completion (retrieving complete memories from partial cues). To capture these complementary functions, we leverage the representational generalization capabilities of variational autoencoders (VAEs) and the robust memory storage properties of Modern Hopfield networks (MHNs), combining them into a neurally plausible continual learning model. We evaluate this model on the Split-MNIST task, a popular continual learning benchmark, and achieve close to state-of-the-art accuracy (~90%), substantially reducing forgetting. Representational analyses empirically confirm the functional dissociation: the VAE underwrites pattern completion, while the MHN drives pattern separation. By capturing pattern separation and completion in scalable architectures, our work provides a functional template for modeling memory consolidation, generalization, and continual learning in both biological and artificial systems.
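The MHN half of this hybrid presumably builds on the continuous modern Hopfield update rule (Ramsauer et al., 2020), in which a cue is mapped to a softmax-weighted combination of stored patterns; at high inverse temperature the update is nearly one-hot, keeping stored memories distinct, consistent with the separation role the abstract assigns to the MHN. A minimal NumPy sketch of that retrieval step follows; the value of beta, the single-step default, and the idea of applying it to VAE latent codes are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of continuous modern Hopfield retrieval (assumed form;
# memories are stored row-wise, query is a partial or noisy cue).
import numpy as np

def mhn_retrieve(query, memories, beta=8.0, steps=1):
    """Iterate q <- M^T softmax(beta * M q): recall a stored pattern from a cue."""
    q = query
    for _ in range(steps):
        logits = beta * memories @ q          # similarity to each stored pattern
        attn = np.exp(logits - logits.max())  # numerically stable softmax
        attn /= attn.sum()
        q = memories.T @ attn                 # move toward the nearest memory
    return q
```

With a large beta the softmax is nearly one-hot and retrieval snaps to a single stored pattern; with a small beta, retrieval blends memories, one way to read the separation/completion dissociation the representational analyses probe.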


Bridging Neuroscience and AI: Environmental Enrichment as a Model for Forward Knowledge Transfer

Saxena, Rajat, McNaughton, Bruce L.

arXiv.org Artificial Intelligence

Continual learning (CL) refers to an agent's capability to learn from a continuous stream of data and transfer knowledge without forgetting old information. One crucial aspect of CL is forward transfer, i.e., improved and faster learning on a new task by leveraging information from prior knowledge. While this ability comes naturally to biological brains, it poses a significant challenge for artificial intelligence (AI). Here, we suggest that environmental enrichment (EE) can be used as a biological model for studying forward transfer, inspiring human-like AI development. EE refers to an experimental paradigm in animal studies that enhances cognitive, social, motor, and sensory stimulation, and is a model for what, in humans, is referred to as 'cognitive reserve'. Enriched animals show significant improvement in learning speed and performance on new tasks, typically exhibiting forward transfer. We explore anatomical, molecular, and neuronal changes post-EE and discuss how artificial neural networks (ANNs) can be used to predict neural computation changes after enriched experiences. Finally, we provide a synergistic way of combining neuroscience and AI research that paves the way toward developing AI capable of rapid and efficient new-task learning.
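Forward transfer, the quantity this perspective turns on, is usually measured from a task-by-task accuracy matrix. Below is a minimal sketch following one common definition (Lopez-Paz & Ranzato, 2017); the matrix layout and the baseline convention are assumptions, since the abstract does not commit to a specific metric.

```python
# Minimal sketch of a standard forward-transfer metric (assumed convention,
# after Lopez-Paz & Ranzato, 2017; not tied to this paper's experiments).
import numpy as np

def forward_transfer(acc, baseline):
    """Average forward transfer across a task sequence.

    acc[i, j]  : test accuracy on task j after sequentially training tasks 0..i.
    baseline[j]: accuracy on task j of a reference model with no prior training.
    acc[j-1, j] is measured before task j itself is trained, so positive values
    mean earlier tasks transferred knowledge forward.
    """
    T = acc.shape[0]
    return float(np.mean([acc[j - 1, j] - baseline[j] for j in range(1, T)]))
```

In this framing, an EE-style experiment would compare forward transfer under 'enriched' versus 'standard' pre-training regimes.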


Person, Woman, Man, Camera, TV - Issue 93: Forerunners

Nautilus

Imagine that someone asked you to come up with a sequence of five words. In any other year, some idiosyncratic combination would likely come to mind. This year, though, one five-word sequence that has been etched into the memories of many Americans, and many worldwide, stands out--"person, woman, man, camera, TV." Donald Trump, touting his ability to memorize these words as part of a cognitive health test, made the sequence famous. We can tie together our personal experiences and acquired knowledge--such as this memory of Trump's behavior--into interconnected memories, recallable at a moment's notice.


How humans store memories is 'cornerstone of our intelligence'

Daily Mail - Science & tech

Scientists believe they may have discovered the 'cornerstone of human intelligence', and it is all down to how we create and store memories. Previous research shows animals use a technique called 'pattern separation', which stores memories in separate groups of neurons in the hippocampus. This stops them from getting mixed up, and it was believed humans probably use this technique as well. But a new study by experts at the University of Leicester shows the same group of neurons in the hippocampus stores all memories. This key difference, the researchers say, could be the single factor which allowed our intellect to surpass that of other animals.


DeepMind's New Research on Linking Memories, and How It Applies to AI

#artificialintelligence

There's a cognitive quirk humans have that seems deceptively elementary. For example: every morning, you see a man in his 30s walking a boisterous collie. Then one day, a white-haired lady with a striking resemblance to him comes down the street with the same dog. Subconsciously, we immediately make a series of deductions: the man and woman might be from the same household. The lady may be the man's mother, or some other close relative.